Results 1 - 11 of 11
1.
Tuberk Toraks ; 71(2): 131-137, 2023 Jun.
Article in Turkish | MEDLINE | ID: mdl-37345395

ABSTRACT

Introduction: Pulmonary embolism is a type of thromboembolism seen in the main pulmonary artery and its branches. This study aimed to diagnose acute pulmonary embolism on computed tomographic pulmonary angiography (CTPA) using a deep learning method and to segment the pulmonary embolism findings. Materials and Methods: The CTPA images of patients diagnosed with pulmonary embolism who underwent scheduled imaging were retrospectively evaluated. After data collection, the areas diagnosed as emboli in the axial section images were segmented. The dataset was divided into three parts: training, validation, and testing. Results were calculated using a 50% cut-off value for the intersection over union (IoU). Results: Images were obtained from 1,550 patients. The mean age of the patients was 64.23 ± 15.45 years. A total of 2,339 axial computed tomography images obtained from the 1,550 patients were used. A PyTorch U-Net was trained for 400 epochs, and the best model, from epoch 178, was retained. In the testing group, there were 471 true positives and 35 false positives, and 27 emboli were not detected. The sensitivity of the CTPA segmentation was 0.95, the precision was 0.93, and the F1 score was 0.94. The area under the curve in the receiver operating characteristic analysis was 0.88. Conclusions: In this study, the deep learning method was successfully employed for the segmentation of acute pulmonary embolism on CTPA, yielding positive results.
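
As a rough illustration of the evaluation described above, the sketch below counts true positives, false positives, and missed emboli for predicted segmentation masks at a 50% IoU cut-off and derives sensitivity, precision, and F1. It is a minimal sketch with illustrative names, not the authors' code, and the simple per-image counting rule is an assumption.

```python
import numpy as np

def iou(pred_mask: np.ndarray, gt_mask: np.ndarray) -> float:
    """Intersection over union (Jaccard index) of two binary masks."""
    pred, gt = pred_mask.astype(bool), gt_mask.astype(bool)
    union = np.logical_or(pred, gt).sum()
    return float(np.logical_and(pred, gt).sum() / union) if union else 0.0

def segmentation_metrics(pred_masks, gt_masks, iou_cutoff=0.5):
    """Count per-image TP/FP/FN at the given IoU cut-off and derive the metrics."""
    tp = fp = fn = 0
    for pred, gt in zip(pred_masks, gt_masks):
        if gt.any() and iou(pred, gt) >= iou_cutoff:
            tp += 1   # embolism present and segmented with sufficient overlap
        elif pred.any() and not gt.any():
            fp += 1   # embolism predicted where none was annotated
        elif gt.any():
            fn += 1   # embolism present but missed, or overlap below the cut-off
    sensitivity = tp / (tp + fn)
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return sensitivity, precision, f1
```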


Subjects
Deep Learning, Pulmonary Embolism, Humans, Middle Aged, Aged, Retrospective Studies, Pulmonary Embolism/diagnostic imaging, Tomography, X-Ray Computed, Acute Disease, Angiography/methods
2.
Pol J Radiol ; 87: e516-e520, 2022.
Article in English | MEDLINE | ID: mdl-36250137

ABSTRACT

Purpose: Magnetic resonance imaging (MRI) has a special place in the evaluation of orbital and periorbital lesions. Segmentation is one of the tasks addressed by deep learning methods. In this study, we aimed to perform segmentation of orbital and periorbital lesions. Material and methods: Contrast-enhanced orbital MRIs performed between 2010 and 2019 were retrospectively screened, and 302 cross-sections of contrast-enhanced, fat-suppressed, T1-weighted, axial MRI images of 95 patients obtained using 3 T and 1.5 T devices were included in the study. The dataset was divided into three parts: training, test, and validation. The amount of training and validation data was increased fourfold by applying data augmentation (horizontal flips, vertical flips, and both). A PyTorch U-Net was trained for 100 epochs. The intersection over union (IoU) statistic (the Jaccard index) threshold was set at 50%, and the results were calculated. Results: The 77th-epoch model provided the best results: 23 true positives, 4 false positives, and 8 false negatives. The precision, sensitivity, and F1 score were 0.85, 0.74, and 0.79, respectively. Conclusions: Our study demonstrated successful segmentation with a deep learning method. It is one of the pioneering studies on this subject and will shed light on further segmentation studies in orbital MR images.
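
A brief sketch of the fourfold flip augmentation described above, assuming the MRI slices and their lesion masks are held as 2-D NumPy arrays; the function names are illustrative, not the authors' pipeline.

```python
import numpy as np

def augment_with_flips(images, masks):
    """Quadruple a segmentation dataset with horizontal, vertical, and combined flips.

    `images` and `masks` are lists of 2-D NumPy arrays (one lesion mask per MRI slice).
    """
    aug_images, aug_masks = [], []
    for img, msk in zip(images, masks):
        variants = [
            (img, msk),                                               # original
            (np.fliplr(img), np.fliplr(msk)),                         # horizontal flip
            (np.flipud(img), np.flipud(msk)),                         # vertical flip
            (np.flipud(np.fliplr(img)), np.flipud(np.fliplr(msk))),   # both flips
        ]
        for i, m in variants:
            aug_images.append(i)
            aug_masks.append(m)
    return aug_images, aug_masks
```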

3.
Med Princ Pract ; 31(6): 555-561, 2022.
Article in English | MEDLINE | ID: mdl-36167054

ABSTRACT

OBJECTIVE: The purpose of the study was to create an artificial intelligence (AI) system for detecting idiopathic osteosclerosis (IO) on panoramic radiographs for automatic, routine, and simple evaluations. SUBJECT AND METHODS: In this study, a deep learning method was applied to panoramic radiographs obtained from healthy patients. A total of 493 anonymized panoramic radiographs were used to develop the AI system (CranioCatch, Eskisehir, Turkey) for the detection of IOs. The panoramic radiographs were acquired from the radiology archives of the Department of Oral and Maxillofacial Radiology, Faculty of Dentistry, Eskisehir Osmangazi University. A GoogLeNet Inception v2 model implemented with the TensorFlow library was used for the detection of IOs. A confusion matrix was used to evaluate model performance. RESULTS: The AI model accurately detected 50 of the 57 IOs present in the 52 test images. The sensitivity, precision, and F-measure values were 0.88, 0.83, and 0.86, respectively. CONCLUSION: A deep learning-based AI algorithm has the potential to detect IOs accurately on panoramic radiographs. AI systems may reduce the workload of dentists in terms of diagnostic efforts.
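
For reference, a minimal sketch of how the confusion-matrix metrics quoted above follow from detection counts. The false-positive count is not reported in the abstract; the value used here (about 10) is only an inference made to be consistent with the stated precision of 0.83.

```python
def detection_metrics(tp: int, fp: int, fn: int):
    """Sensitivity (recall), precision, and F-measure from detection counts."""
    sensitivity = tp / (tp + fn)
    precision = tp / (tp + fp)
    f_measure = 2 * precision * sensitivity / (precision + sensitivity)
    return sensitivity, precision, f_measure

# 50 of 57 IOs were found, so fn = 7; fp is not reported, and ~10 is assumed
# here only because it matches the stated precision of 0.83.
print(detection_metrics(tp=50, fp=10, fn=7))  # ~(0.877, 0.833, 0.855), close to the reported 0.88 / 0.83 / 0.86
```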


Subjects
Deep Learning, Osteosclerosis, Humans, Artificial Intelligence, Radiography, Panoramic, Algorithms, Osteosclerosis/diagnostic imaging
4.
Biomed Res Int ; 2022: 7035367, 2022.
Article in English | MEDLINE | ID: mdl-35075428

ABSTRACT

The purpose of the paper was to assess the success of an artificial intelligence (AI) algorithm based on a deep convolutional neural network (D-CNN) model for the segmentation of apical lesions on dental panoramic radiographs. A total of 470 anonymized panoramic radiographs were used to develop the D-CNN AI model based on the U-Net algorithm (CranioCatch, Eskisehir, Turkey) for the segmentation of apical lesions. The radiographs were obtained from the Radiology Archive of the Department of Oral and Maxillofacial Radiology of the Faculty of Dentistry of Eskisehir Osmangazi University. A U-Net implemented in PyTorch (version 1.4.0) was used for the segmentation of apical lesions. In the test data set, the AI model segmented 63 periapical lesions on 47 panoramic radiographs. The sensitivity, precision, and F1 score for the segmentation of periapical lesions at a 70% IoU threshold were 0.92, 0.84, and 0.88, respectively. AI systems have the potential to overcome clinical problems and may facilitate the assessment of periapical pathology on panoramic radiographs.
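
To make the U-Net approach referenced throughout these studies concrete, here is a compact, self-contained PyTorch encoder-decoder in the U-Net style. It is a simplified two-level sketch for single-channel radiographs, not the CranioCatch implementation; channel widths and the training setup are illustrative assumptions.

```python
import torch
import torch.nn as nn

def double_conv(in_ch, out_ch):
    """Two 3x3 convolution + ReLU layers, the basic U-Net building block."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
    )

class MiniUNet(nn.Module):
    """A two-level U-Net: encoder, bottleneck, and decoder with skip connections."""
    def __init__(self, in_ch=1, out_ch=1):
        super().__init__()
        self.enc1 = double_conv(in_ch, 32)
        self.enc2 = double_conv(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = double_conv(64, 128)
        self.up2 = nn.ConvTranspose2d(128, 64, kernel_size=2, stride=2)
        self.dec2 = double_conv(128, 64)
        self.up1 = nn.ConvTranspose2d(64, 32, kernel_size=2, stride=2)
        self.dec1 = double_conv(64, 32)
        self.head = nn.Conv2d(32, out_ch, kernel_size=1)   # per-pixel lesion logit

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)

# Training sketch: pixel-wise binary cross-entropy against the lesion masks.
model = MiniUNet()
loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
```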


Subjects
Artificial Intelligence, Tooth, Algorithms, Humans, Neural Networks, Computer, Radiography, Panoramic
5.
Dentomaxillofac Radiol ; 51(3): 20210246, 2022 Mar 01.
Article in English | MEDLINE | ID: mdl-34623893

ABSTRACT

OBJECTIVES: The present study aimed to evaluate the performance of a Faster Region-based Convolutional Neural Network (R-CNN) algorithm for tooth detection and numbering on periapical images. METHODS: A data set of 1686 randomly selected periapical radiographs was collected retrospectively. A pre-trained model (GoogLeNet Inception v3 CNN) was employed for pre-processing, and transfer learning techniques were applied for training on the data set. The algorithm consisted of (1) a jaw classification model, (2) region detection models, and (3) the final algorithm using all models. The analysis of the final model was then carried out alongside that of the others. The sensitivity, precision, true-positive rate, and false-positive/negative rates were computed from a confusion matrix to analyze the performance of the algorithm. RESULTS: An artificial intelligence algorithm (CranioCatch, Eskisehir, Turkey) was designed based on an R-CNN Inception architecture to automatically detect and number the teeth on periapical images. Of 864 teeth in 156 periapical radiographs, 668 were correctly numbered in the test data set. The F1 score, precision, and sensitivity were 0.8720, 0.7812, and 0.9867, respectively. CONCLUSION: The study demonstrated the potential accuracy and efficiency of the CNN algorithm for detecting and numbering teeth. Deep learning-based methods can help clinicians reduce workloads, improve dental records, and reduce turnaround time for urgent cases. This architecture might also contribute to forensic science.
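
As a hedged illustration of the transfer-learning step, the sketch below fine-tunes an off-the-shelf Faster R-CNN detector for tooth detection and numbering. The study used a TensorFlow Inception-based detector; the torchvision model here is only an illustrative stand-in, and the class count (32 tooth numbers plus background) is an assumption.

```python
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# Assumed label space: 32 permanent teeth plus one background class.
num_classes = 1 + 32

# Start from a detector pre-trained on COCO and replace its classification head,
# the usual transfer-learning recipe for a small radiographic data set.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

# During training the model takes a list of image tensors and a list of target
# dicts with "boxes" (x1, y1, x2, y2) and "labels" (tooth number) and returns a
# dict of losses; at inference it returns predicted boxes, labels, and scores.
```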


Subjects
Artificial Intelligence, Tooth, Algorithms, Humans, Neural Networks, Computer, Retrospective Studies
6.
Oral Radiol ; 38(3): 363-369, 2022 07.
Article in English | MEDLINE | ID: mdl-34611840

ABSTRACT

OBJECTIVES: The goal of this study was to develop and evaluate the performance of a new deep-learning (DL) artificial intelligence (AI) model for diagnostic charting in panoramic radiography. METHODS: One thousand eighty-four anonymous dental panoramic radiographs were labeled by two dento-maxillofacial radiologists for ten different dental situations: crown, pontic, root-canal treated tooth, implant, implant-supported crown, impacted tooth, residual root, filling, caries, and dental calculus. The AI model (CranioCatch, Eskisehir, Turkey), based on a deep CNN method, was proposed and evaluated. A Faster R-CNN Inception v2 (COCO) model implemented with the TensorFlow library was used for model development. AI model performance was assessed using sensitivity, precision, and F1 scores. RESULTS: When the performance of the proposed AI model for detecting dental conditions in panoramic radiographs was evaluated, the best sensitivity values were obtained for the crown, implant, and impacted tooth (0.9674, 0.9615, and 0.9658, respectively). The worst sensitivity values were obtained for the pontic, caries, and dental calculus (0.7738, 0.3026, and 0.0934, respectively). The best precision values were obtained for the pontic, implant, and implant-supported crown (0.8783, 0.9259, and 0.8947, respectively). The worst precision values were obtained for the residual root, caries, and dental calculus (0.6764, 0.5096, and 0.1923, respectively). The most successful F1 scores were obtained for the implant, crown, and implant-supported crown (0.9433, 0.9122, and 0.8947, respectively). CONCLUSION: The proposed AI model has promising results in detecting dental conditions in panoramic radiographs, except for caries and dental calculus. Thanks to the improvement of AI models in all areas of dental radiology, we predict that they will help physicians in panoramic diagnosis and treatment planning, as well as in digital-based student education, especially during the pandemic period.


Subjects
Deep Learning, Tooth, Impacted, Artificial Intelligence, Dental Calculus, Humans, Radiography, Panoramic
7.
Oral Radiol ; 38(4): 468-479, 2022 10.
Article in English | MEDLINE | ID: mdl-34807344

ABSTRACT

OBJECTIVES: The aim of this study was to propose an automatic caries detection and segmentation model for dental bitewing radiographs based on convolutional neural network (CNN) algorithms, using VGG-16 and U-Net architectures, and to evaluate the clinical performance of the model compared with human observers. METHODS: A total of 621 anonymized bitewing radiographs were used to develop the artificial intelligence (AI) system (CranioCatch, Eskisehir, Turkey) for the detection and segmentation of caries lesions. The radiographs were obtained from the Radiology Archive of the Department of Oral and Maxillofacial Radiology of the Faculty of Dentistry of Ordu University. VGG-16 and U-Net models implemented in PyTorch were used for the detection and segmentation of caries lesions, respectively. RESULTS: The sensitivity, precision, and F-measure were 0.84, 0.84, and 0.84 for caries detection and 0.81, 0.86, and 0.84 for caries segmentation, respectively. When the AI models were compared with five experienced observers on an external radiographic dataset, the AI models outperformed the assistant specialists. CONCLUSION: CNN-based AI algorithms have the potential to detect and segment dental caries accurately and effectively in bitewing radiographs. AI algorithms based on the deep-learning method have the potential to assist clinicians in routine clinical practice by quickly and reliably detecting dental caries. The use of these algorithms in clinical practice can provide an important benefit to physicians as a clinical decision-support system in dentistry.
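
A sketch of the two-stage idea (VGG-16 for caries detection, U-Net for segmentation). The head replacement, the channel handling, and the thresholding logic are illustrative assumptions rather than the published pipeline; `unet` stands for any PyTorch U-Net, such as the one sketched earlier in this listing.

```python
import torch
import torch.nn as nn
from torchvision import models

# Stage 1: a VGG-16 classifier whose final layer is replaced with a two-class
# head (caries / no caries).  Bitewing images are assumed to be replicated to
# three channels so the ImageNet-pretrained backbone can be reused.
vgg = models.vgg16(weights="DEFAULT")
vgg.classifier[6] = nn.Linear(vgg.classifier[6].in_features, 2)

def detect_then_segment(image_batch, unet, threshold=0.5):
    """Classify each image first; keep segmentation masks only for flagged images."""
    with torch.no_grad():
        caries_prob = torch.softmax(vgg(image_batch), dim=1)[:, 1]
        # Stage 2: the U-Net consumes a single-channel version of the same images.
        masks = torch.sigmoid(unet(image_batch.mean(dim=1, keepdim=True)))
    keep = (caries_prob >= threshold).view(-1, 1, 1, 1).float()
    return masks * keep   # masks zeroed out for images classified as caries-free
```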


Subjects
Deep Learning, Dental Caries, Artificial Intelligence, Dental Caries/diagnostic imaging, Dental Caries Susceptibility, Humans, Radiography, Bitewing/methods
8.
BMC Med Imaging ; 21(1): 124, 2021 08 13.
Article in English | MEDLINE | ID: mdl-34388975

ABSTRACT

BACKGROUND: Panoramic radiography is an imaging method for displaying maxillary and mandibular teeth together with their supporting structures. Panoramic radiography is frequently used in dental imaging due to its relatively low radiation dose, short imaging time, and low burden to the patient. We verified the diagnostic performance of an artificial intelligence (AI) system based on a deep convolutional neural network method to detect and number teeth on panoramic radiographs. METHODS: The data set included 2482 anonymized panoramic radiographs from adults from the archive of Eskisehir Osmangazi University, Faculty of Dentistry, Department of Oral and Maxillofacial Radiology. A Faster R-CNN Inception v2 model was used to develop an AI algorithm (CranioCatch, Eskisehir, Turkey) to automatically detect and number teeth on panoramic radiographs. Human observation and AI methods were compared on a test data set consisting of 249 panoramic radiographs. True positive, false positive, and false negative rates were calculated for each quadrant of the jaws. The sensitivity, precision, and F-measure values were estimated using a confusion matrix. RESULTS: The total numbers of true positive, false positive, and false negative results were 6940, 250, and 320 for all quadrants, respectively. Consequently, the estimated sensitivity, precision, and F-measure were 0.9559, 0.9652, and 0.9606, respectively. CONCLUSIONS: The deep convolutional neural network system was successful in detecting and numbering teeth. Clinicians can use AI systems to detect and number teeth on panoramic radiographs, which may eventually replace evaluation by human observers and support decision making.


Subjects
Neural Networks, Computer, Radiography, Panoramic, Tooth/diagnostic imaging, Algorithms, Datasets as Topic, Deep Learning, Humans, Sensitivity and Specificity
9.
Dentomaxillofac Radiol ; 50(6): 20200172, 2021 Sep 01.
Article in English | MEDLINE | ID: mdl-33661699

ABSTRACT

OBJECTIVE: This study evaluated the use of a deep-learning approach for the automated detection and numbering of deciduous teeth in children as depicted on panoramic radiographs. METHODS AND MATERIALS: An artificial intelligence (AI) algorithm (CranioCatch, Eskisehir, Turkey) using Faster R-CNN Inception v2 (COCO) models was developed to automatically detect and number deciduous teeth as seen on pediatric panoramic radiographs. The algorithm was trained and tested on a total of 421 panoramic images. System performance was assessed using a confusion matrix. RESULTS: The AI system was successful in detecting and numbering the deciduous teeth of children as depicted on panoramic radiographs. The sensitivity and precision rates were high: the estimated sensitivity, precision, and F1 score were 0.9804, 0.9571, and 0.9686, respectively. CONCLUSION: Deep-learning-based AI models are a promising tool for the automated charting of panoramic dental radiographs of children. In addition to serving as a time-saving measure and an aid to clinicians, AI plays a valuable role in forensic identification.


Subjects
Artificial Intelligence, Tooth, Algorithms, Child, Humans, Radiography, Panoramic, Tooth, Deciduous, Turkey
10.
Curr Med Imaging ; 17(9): 1137-1141, 2021.
Article in English | MEDLINE | ID: mdl-33563200

ABSTRACT

BACKGROUND: Every year, lung cancer accounts for a high percentage of deaths worldwide. Early detection of lung cancer is important for its effective treatment, and non-invasive rapid methods are usually used for diagnosis. INTRODUCTION: In this study, we aimed to detect lung cancer using deep learning methods and to determine the contribution of deep learning to the classification of lung carcinoma using a convolutional neural network (CNN). METHODS: A total of 301 patients diagnosed with lung carcinoma pathologies in our hospital were included in the study. Thoracic computed tomography (CT) was performed for diagnostic purposes prior to treatment. After tagging the section images, tumor detection, small cell versus non-small cell lung carcinoma differentiation, adenocarcinoma versus squamous cell lung carcinoma differentiation, and adenocarcinoma versus squamous cell versus small cell lung carcinoma differentiation were sequentially performed using deep CNN methods. RESULTS: In total, 301 lung carcinoma images were used to detect tumors, and the model obtained with the deep CNN system exhibited 0.93 sensitivity, 0.82 precision, and a 0.87 F1 score in detecting lung carcinoma. In the differentiation of small cell versus non-small cell lung carcinoma, the sensitivity, precision, and F1 score of the CNN model at the test stage were 0.92, 0.65, and 0.76, respectively. In the adenocarcinoma versus squamous cell carcinoma differentiation, the sensitivity, precision, and F1 score were 0.95, 0.80, and 0.86, respectively. The patients were finally grouped as small cell lung carcinoma, adenocarcinoma, and squamous cell lung carcinoma, and the CNN model was used to determine whether it could differentiate these groups. The sensitivity, specificity, and F1 score of this model were 0.90, 0.44, and 0.59, respectively, for this differentiation. CONCLUSION: In this study, we successfully detected tumors and differentiated between the adenocarcinoma and squamous cell carcinoma groups with the deep learning method using the CNN model. Given their non-invasive nature and their success, deep learning methods should be integrated into radiology to diagnose lung carcinoma.
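
A minimal sketch of a CNN classifier of the kind described, here for the three-way small cell / adenocarcinoma / squamous cell differentiation. The architecture, channel widths, and hyperparameters are illustrative assumptions, not the study's model.

```python
import torch
import torch.nn as nn

class LungCarcinomaCNN(nn.Module):
    """A small CNN for three-way classification of CT slices:
    small cell carcinoma vs. adenocarcinoma vs. squamous cell carcinoma."""
    def __init__(self, num_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):                 # x: (N, 1, H, W) CT slices
        return self.classifier(self.features(x).flatten(1))

model = LungCarcinomaCNN()
loss_fn = nn.CrossEntropyLoss()          # labels: 0 = small cell, 1 = adenocarcinoma, 2 = squamous cell
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
```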


Subjects
Carcinoma, Squamous Cell, Deep Learning, Lung Neoplasms, Artificial Intelligence, Carcinoma, Squamous Cell/diagnostic imaging, Humans, Lung, Lung Neoplasms/diagnostic imaging, Tomography, X-Ray Computed
11.
Acta Odontol Scand ; 79(4): 275-281, 2021 May.
Article in English | MEDLINE | ID: mdl-33176533

ABSTRACT

OBJECTIVES: Radiological examination has an important place in dental practice, and it is frequently used in intraoral imaging. The correct numbering of teeth on radiographs is a routine practice that takes time for the dentist. This study aimed to propose an automatic detection system for the numbering of teeth in bitewing images using a Faster Region-based Convolutional Neural Network (R-CNN) method. METHODS: The study included 1125 bitewing radiographs of patients who attended the Faculty of Dentistry of Ordu University from 2018 to 2019. A Faster R-CNN, an advanced object identification method, was used to identify the teeth. A confusion matrix was used to evaluate the success of the model. RESULTS: The deep CNN system (CranioCatch, Eskisehir, Turkey) was used to detect and number teeth in bitewing radiographs. Of 715 teeth in 109 bitewing images, 697 were correctly numbered in the test data set. The F1 score, precision, and sensitivity were 0.9515, 0.9293, and 0.9748, respectively. CONCLUSIONS: A CNN approach for the analysis of bitewing images shows promise for detecting and numbering teeth. This method can save dentists time by automatically preparing dental charts.


Subjects
Artificial Intelligence, Tooth, Dental Occlusion, Humans, Neural Networks, Computer, Tooth/diagnostic imaging, Turkey